Given $N$ independent samples $x_{1:N}$, $$ L(\theta ; x_{1:N}) = P(x_{1:N};\theta) = \prod_{i=1}^N \theta^{x_i} (1 - \theta)^{1-x_i} $$
Log-Likelihood $$ \begin{eqnarray*} \log L &=& \log P(x_{1:N};\theta) \\ &=& \sum_{i=1}^N \big\{ {x_i} \log\theta + (1-x_i)\log(1 - \theta) \big\} \\ &=& \sum_{i=1}^N {x_i} \log\theta + \left( N-\sum_{i=1}^N x_i \right) \log( 1 - \theta ) \\ \end{eqnarray*} $$
Therefore, with $N_1 = \sum_{i=1}^N x_i$ denoting the number of ones, the Log-Likelihood is $$ \begin{eqnarray*} \log L &=& N_1 \log\theta + (N-N_1) \log(1 - \theta) \\ \end{eqnarray*} $$
Log-Likelihood Derivative
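Setting the derivative of the log-likelihood with respect to $\theta$ to zero gives the maximum likelihood estimate: $$ \begin{eqnarray*} \dfrac{\partial \log L}{\partial \theta} &=& \dfrac{N_1}{\theta} - \dfrac{N - N_1}{1-\theta} = 0 \\ \therefore \;\; \hat\theta &=& \dfrac{N_1}{N} \end{eqnarray*} $$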
In [5]:
import numpy as np
import scipy as sp
import scipy.stats

np.random.seed(0)
theta0 = 0.6                              # true parameter
x = sp.stats.bernoulli(theta0).rvs(1000)  # N = 1000 Bernoulli samples
N0, N1 = np.bincount(x, minlength=2)      # counts of zeros and ones
N = N0 + N1
theta = N1 / N                            # MLE: theta-hat = N1 / N
theta
Out[5]:
Given $N$ independent samples $x_{1:N}$, $$ L(\theta ; x_{1:N}) = P(x_{1:N};\theta) = \prod_{i=1}^N \prod_{k=1}^K \theta_k^{x_{i,k}} $$
Log-Likelihood $$ \begin{eqnarray*} \log L &=& \log P(x_{1:N};\theta) \\ &=& \sum_{i=1}^N \sum_{k=1}^K {x_{i,k}} \log\theta_k \\ &=& \sum_{k=1}^K \log\theta_k \sum_{i=1}^N {x_{i,k}} \end{eqnarray*} $$
Therefore, with $N_k = \sum_{i=1}^N x_{i,k}$ denoting the number of samples in category $k$, the Log-Likelihood is $$ \begin{eqnarray*} \log L &=& \sum_{k=1}^K N_k \log\theta_k \end{eqnarray*} $$
Additional constraint $$ \sum_{k=1}^K \theta_k = 1 $$
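Maximizing the log-likelihood subject to this constraint with a Lagrange multiplier $\lambda$: $$ \begin{eqnarray*} J &=& \sum_{k=1}^K N_k \log\theta_k + \lambda \left( 1 - \sum_{k=1}^K \theta_k \right) \\ \dfrac{\partial J}{\partial \theta_k} &=& \dfrac{N_k}{\theta_k} - \lambda = 0 \;\;\Rightarrow\;\; \theta_k = \dfrac{N_k}{\lambda} \\ \sum_{k=1}^K \theta_k &=& 1 \;\;\Rightarrow\;\; \lambda = \sum_{k=1}^K N_k = N \\ \therefore \;\; \hat\theta_k &=& \dfrac{N_k}{N} \end{eqnarray*} $$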
In [6]:
np.random.seed(0)
theta0 = np.array([0.1, 0.3, 0.6])                  # true category probabilities
x = np.random.choice(np.arange(3), 1000, p=theta0)  # N = 1000 categorical samples
N0, N1, N2 = np.bincount(x, minlength=3)            # counts per category
N = N0 + N1 + N2
theta = np.array([N0, N1, N2]) / N                  # MLE: theta_k-hat = N_k / N
theta
Out[6]:
Given $N$ independent samples $x_{1:N}$, $$ L(\theta;x_{1:N}) = p(x_{1:N};\theta) = \prod_{i=1}^N \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp \left(-\dfrac{(x_i-\mu)^2}{2\sigma^2}\right)$$
Log-Likelihood $$ \begin{eqnarray*} \log L &=& \log p(x_{1:N};\theta) \\ &=& \sum_{i=1}^N \left\{ -\dfrac{1}{2}\log(2\pi\sigma^2) - \dfrac{(x_i-\mu)^2}{2\sigma^2} \right\} \\ &=& -\dfrac{N}{2} \log(2\pi\sigma^2) - \dfrac{1}{2\sigma^2}\sum_{i=1}^N (x_i-\mu)^2 \end{eqnarray*} $$
Log-Likelihood Derivative
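Setting the partial derivatives with respect to $\mu$ and $\sigma^2$ to zero: $$ \begin{eqnarray*} \dfrac{\partial \log L}{\partial \mu} &=& \dfrac{1}{\sigma^2} \sum_{i=1}^N (x_i - \mu) = 0 \;\;\Rightarrow\;\; \hat\mu = \dfrac{1}{N} \sum_{i=1}^N x_i = \bar{x} \\ \dfrac{\partial \log L}{\partial \sigma^2} &=& -\dfrac{N}{2\sigma^2} + \dfrac{1}{2\sigma^4} \sum_{i=1}^N (x_i - \mu)^2 = 0 \;\;\Rightarrow\;\; \hat\sigma^2 = \dfrac{1}{N} \sum_{i=1}^N (x_i - \bar{x})^2 \end{eqnarray*} $$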
In [7]:
np.random.seed(0)
mu0 = 1
sigma0 = 2
x = sp.stats.norm(mu0, sigma0).rvs(1000)  # N = 1000 normal samples
xbar = x.mean()                           # MLE of mu: sample mean
s2 = x.var()                              # MLE of sigma^2: biased variance (divides by N)
xbar, s2
Out[7]:
Given $N$ independent samples $x_{1:N}$, $$ L(\theta;x_{1:N}) = p(x_{1:N};\theta) = \prod_{i=1}^N \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} \exp \left( -\dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right)$$
Log-Likelihood $$ \begin{eqnarray*} \log L &=& \log P(x_{1:N};\theta) \\ &=& \sum_{i=1}^N \left\{ -\log((2\pi)^{D/2} |\Sigma|^{1/2}) - \dfrac{1}{2} (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \right\} \\ &=& C -\dfrac{N}{2} \log|\Sigma| - \dfrac{1}{2} \sum_{i=1}^N (x_i-\mu)^T \Sigma^{-1} (x_i-\mu) \end{eqnarray*} $$ where $C = -\dfrac{ND}{2}\log(2\pi)$ is a constant that does not depend on the parameters.
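Setting the gradients with respect to $\mu$ and $\Sigma$ to zero (omitting the matrix-calculus steps) yields the standard sample estimates: $$ \hat\mu = \dfrac{1}{N} \sum_{i=1}^N x_i, \qquad \hat\Sigma = \dfrac{1}{N} \sum_{i=1}^N (x_i - \hat\mu)(x_i - \hat\mu)^T $$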
In [8]:
np.random.seed(0)
mu0 = np.array([0, 1])
sigma0 = np.array([[1, 0.2], [0.2, 4]])                   # true covariance matrix
x = sp.stats.multivariate_normal(mu0, sigma0).rvs(1000)   # N = 1000 two-dimensional samples
xbar = x.mean(axis=0)                                      # MLE of mu: sample mean vector
S2 = np.cov(x, rowvar=False, bias=True)                    # MLE of Sigma: bias=True divides by N, matching the derivation
print(xbar)
print(S2)